
    Modeling Expertise in Assistive Navigation Interfaces for Blind People

    Evaluating the impact of expertise and route knowledge on task performance can guide the design of intelligent and adaptive navigation interfaces. Expertise has been relatively unexplored in the context of assistive indoor navigation interfaces for blind people. To quantify the complex relationship between the user's walking patterns, route learning, and adaptation to the interface, we conducted a study with 8 blind participants. The participants repeated a set of navigation tasks while using a smartphone-based turn-by-turn navigation guidance app. The results demonstrate the gradual evolution of user skill and knowledge throughout the route repetitions, significantly impacting the task completion time. In addition to the exploratory analysis, we take a step towards tailoring the navigation interface to the user's needs by proposing a personalized recurrent neural network-based behavior model for expertise level classification.
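
    A minimal sketch of what such a recurrent behavior model could look like, here in PyTorch; the feature set, hidden size, and number of expertise levels below are illustrative assumptions, not details from the paper:

        import torch
        import torch.nn as nn

        class ExpertiseClassifier(nn.Module):
            """Hypothetical recurrent behavior model: maps a sequence of
            per-step walking features (e.g. speed, deviation from the planned
            route, rotation) to an expertise level. Sizes are illustrative."""
            def __init__(self, n_features=6, hidden=64, n_levels=2):
                super().__init__()
                self.rnn = nn.GRU(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_levels)

            def forward(self, x):        # x: (batch, time, n_features)
                _, h = self.rnn(x)       # h: (num_layers, batch, hidden)
                return self.head(h[-1])  # logits over expertise levels

        # Example: classify one 120-step walking trace with 6 features per step.
        model = ExpertiseClassifier()
        logits = model(torch.randn(1, 120, 6))
        print(logits.argmax(dim=-1))     # predicted expertise level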

    Probabilistic Guarantees for Safe Deep Reinforcement Learning

    Deep reinforcement learning has been successfully applied to many control tasks, but the application of such agents in safety-critical scenarios has been limited due to safety concerns. Rigorous testing of these controllers is challenging, particularly when they operate in probabilistic environments due to, for example, hardware faults or noisy sensors. We propose MOSAIC, an algorithm for measuring the safety of deep reinforcement learning agents in stochastic settings. Our approach is based on the iterative construction of a formal abstraction of a controller's execution in an environment, and leverages probabilistic model checking of Markov decision processes to produce probabilistic guarantees on safe behaviour over a finite time horizon. It produces bounds on the probability of safe operation of the controller for different initial configurations and identifies regions where correct behaviour can be guaranteed. We implement and evaluate our approach on agents trained for several benchmark control problems.
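
    The model-checking step can be pictured as a finite-horizon dynamic program over the abstraction. The sketch below assumes the abstraction is already given as a finite MDP (MOSAIC constructs it iteratively from the controller's execution, which is not shown here), and the states, actions, and probabilities in the example are invented:

        import numpy as np

        def max_unsafe_prob(P, unsafe, horizon):
            """For a finite abstraction with transition tensor P[s, a, s'] and
            a set of unsafe abstract states, compute per state the worst-case
            probability of reaching an unsafe state within `horizon` steps;
            1 - result is then a lower bound on the probability of safe
            operation from that state."""
            v = np.zeros(P.shape[0])
            v[list(unsafe)] = 1.0         # already unsafe
            for _ in range(horizon):
                q = P @ v                 # q[s, a]: expected next-step value
                v = q.max(axis=1)         # resolve nondeterminism adversarially
                v[list(unsafe)] = 1.0     # unsafe states are absorbing
            return v

        # Toy example: 3 abstract states, 2 actions, state 2 unsafe, horizon 10.
        P = np.array([[[0.9, 0.1, 0.0], [0.7, 0.2, 0.1]],
                      [[0.0, 0.8, 0.2], [0.5, 0.5, 0.0]],
                      [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
        print(1.0 - max_unsafe_prob(P, {2}, horizon=10))  # safety bound per state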

    End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners

    For human drivers, having rear- and side-view mirrors is vital for safe driving: they deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With this sensor setup we collect a new driving dataset covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: 1) representing the planned route on OpenStreetMap as a stack of GPS coordinates, and 2) rendering the planned route on TomTom Go Mobile and recording the progression as a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction. Comment: to be published at ECCV 2018.
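
    As a rough illustration of this kind of two-stream model (not the paper's actual architecture), a shared per-camera encoder can process the eight views, and its features can be concatenated with a route-planner embedding before regressing the low-level manoeuvres; every layer size below is made up:

        import torch
        import torch.nn as nn

        class SurroundViewDriver(nn.Module):
            """Sketch: fuse 8 surround-view cameras with a route encoding to
            regress (steering angle, speed). Sizes are illustrative."""
            def __init__(self, route_dim=32):
                super().__init__()
                self.cam_enc = nn.Sequential(          # shared per-camera encoder
                    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.head = nn.Sequential(
                    nn.Linear(8 * 32 + route_dim, 128), nn.ReLU(),
                    nn.Linear(128, 2))                 # -> (steering, speed)

            def forward(self, cams, route):            # cams: (B, 8, 3, H, W)
                b = cams.shape[0]
                feats = self.cam_enc(cams.flatten(0, 1))  # encode all views
                return self.head(torch.cat([feats.view(b, -1), route], dim=1))

        model = SurroundViewDriver()
        out = model(torch.randn(2, 8, 3, 96, 160), torch.randn(2, 32))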

    An assigned responsibility system for robotic teleoperation control

    This paper proposes an architecture that explores a gap in the spectrum of existing strategies for robot control mode switching in adjustable autonomy. In situations where the environment is reasonably known and/or predictable, pre-planning these control changes could relieve robot operators of the additional task of deciding when and how to switch. Such a strategy provides a clear division of labour between the automation and the human operator(s) before the job even begins, allowing individual responsibilities to be known ahead of time, limiting confusion and allowing rest breaks to be planned. Assigned Responsibility is a new form of adjustable-autonomy-based teleoperation that allows the selective inclusion of automated control elements at key stages of a robot operation plan's execution. Progression through these stages is controlled by automatic goal accomplishment tracking. An implementation is evaluated through engineering tests and a usability study, demonstrating the viability of this approach and offering insight into its potential applications.
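
    The core mechanism, control assignments fixed per stage with progression driven by goal-accomplishment tracking, might be sketched as follows; the stage structure, names, and toy demo are hypothetical, not the paper's implementation:

        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Stage:
            """One stage of an operation plan: who controls the robot, plus a
            predicate that detects when the stage's goal is accomplished."""
            name: str
            controller: str                    # "human" or "automation"
            goal_reached: Callable[[], bool]

        def run_plan(stages: List[Stage], step):
            """Pre-assigned control switching: run each stage under its
            assigned controller and advance automatically when its goal
            predicate fires, with no in-mission negotiation over control."""
            for stage in stages:
                print(f"{stage.name}: control -> {stage.controller}")
                while not stage.goal_reached():
                    step(stage.controller)     # one control cycle by that party

        # Toy demo: goals fire after a fixed number of control cycles.
        ticks = {"n": 0}
        plan = [Stage("transit", "automation", lambda: ticks["n"] >= 3),
                Stage("inspection", "human", lambda: ticks["n"] >= 5)]
        run_plan(plan, step=lambda who: ticks.__setitem__("n", ticks["n"] + 1))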

    Occupant Body Pose Estimation for Future Vehicle-Interior Assistance Systems


    Impact of Expertise on Interaction Preferences for Navigation Assistance of Visually Impaired Individuals

    Navigation assistive technologies have been designed to support individuals with visual impairments during independent mobility by providing sensory augmentation and contextual awareness of their surroundings. Such information is habitually provided through predefined audio-haptic interaction paradigms. However, individual capabilities, preferences and behavior of people with visual impairments are heterogeneous, and may change due to experience, context and necessity. Therefore, the circumstances and modalities for providing navigation assistance need to be personalized to different users, and through time for each user. We conduct a study with 13 blind participants to explore how the desirability of messages provided during assisted navigation varies based on users' navigation preferences and expertise. The participants are guided through two different routes, one without prior knowledge and one previously studied and traversed. The guidance is provided through turn-by-turn instructions, enriched with contextual information about the environment. During navigation and follow-up interviews, we uncover that participants have diversified needs for navigation instructions based on their abilities and preferences. Our study motivates the design of future navigation systems capable of verbosity-level personalization in order to keep the users engaged in the current situational context while minimizing distractions.
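
    One simple way to realize such verbosity-level personalization is to tag every generated message with a priority and let a per-user setting decide which messages are actually spoken; the message categories and levels below are invented for illustration and are not from the study:

        # Priority 0 messages are always spoken; higher ones are optional context.
        PRIORITY = {"turn_instruction": 0, "hazard": 0,
                    "landmark": 1, "point_of_interest": 2}

        def messages_to_speak(messages, verbosity):
            """Keep a message only if its priority fits the user's verbosity
            (0 = essential guidance only, 2 = full contextual narration)."""
            return [text for kind, text in messages if PRIORITY[kind] <= verbosity]

        route_events = [("turn_instruction", "Turn left in 20 feet"),
                        ("landmark", "Water fountain on your right"),
                        ("point_of_interest", "Cafe entrance ahead")]
        print(messages_to_speak(route_events, verbosity=1))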

    Virtual Navigation for Blind People: Transferring Route Knowledge to the Real-World

    Independent navigation is challenging for blind people, particularly in unfamiliar environments. Navigation assistive technologies try to provide additional support by guiding users or increasing their knowledge of the surroundings, but accurate solutions are still not widely available. Based on this limitation and on the fact that spatial knowledge can also be acquired indirectly (prior to navigation), we developed an interactive virtual navigation app where users can learn unfamiliar routes before physically visiting the environment. Our main research goals are to understand the acquisition of route knowledge through smartphone-based virtual navigation and how it evolves over time; its ability to support independent, unassisted real-world navigation of short routes; and its ability to improve user performance when using an accurate in-situ navigation tool (NavCog). With these goals in mind, we conducted a user study where 14 blind participants virtually learned routes at home for three consecutive days and then physically navigated them, both unassisted and with NavCog. In virtual navigation, we analyzed the evolution of route knowledge and found that participants were able to quickly learn shorter routes and gradually increase their knowledge of both short and long routes. In the real world, we found that users were able to take advantage of this knowledge, acquired entirely through virtual navigation, to complete unassisted navigation tasks. When using NavCog, users tended to rely more on the navigation system and less on their prior knowledge, and therefore virtual navigation did not significantly improve their performance.

    Vehicle Activity Recognition Using DCNN

    This paper presents a novel Deep Convolutional Neural Network (DCNN) method for vehicle activity classification. We extend our previous approach to classify a larger number of vehicle trajectories in a single network. We also highlight the flexibility of our approach in integrating further scenarios into our classifier. First, a spatiotemporal calculus method is used to encode the relative movement between vehicles as a trajectory of QTC states. We then map the encoded trajectory to a 2D matrix using one-hot vector mapping, which preserves the important positional data and order of each QTC state. To do this we associate the QTC sequences with pixels to form a 2D image texture. Afterwards, we adapt a trained CNN architecture to our vehicle activity recognition task. Two separate driving datasets are used to evaluate our method. We demonstrate that the proposed method outperforms existing techniques. Along with the proposed approach, we created a new dataset of vehicle interactions. Although the focus of this paper is on the automated analysis of vehicle interactions, the proposed technique is general and can be applied to pairwise analysis of moving objects.
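
    A small sketch of the described encoding, assuming the basic QTC_B relation with two characters per timestep (the paper's exact QTC variant and image dimensions may differ):

        from itertools import product
        import numpy as np

        # 9 possible QTC_B states: each vehicle is moving towards (-1), is
        # stationary with respect to (0), or is moving away from (+1) the other.
        QTC_STATES = {s: i for i, s in enumerate(product((-1, 0, 1), repeat=2))}

        def qtc_to_image(sequence):
            """One-hot map a QTC state sequence to a 2D matrix (row = timestep,
            column = QTC state), preserving the order and position of each
            state so the matrix can be fed to a CNN as an image texture."""
            img = np.zeros((len(sequence), len(QTC_STATES)), dtype=np.float32)
            for t, state in enumerate(sequence):
                img[t, QTC_STATES[state]] = 1.0
            return img

        # Example: both approach, then vehicle 2 stops, then both move away.
        print(qtc_to_image([(-1, -1), (-1, 0), (1, 1)]))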